Recent research in embodied AI has relied on simulation environments to develop and train robot learning approaches. However, the use of simulation has limited attention to tasks that only require what robot simulators can simulate: motion and physical contact. We present iGibson 2.0, an open-source simulation environment that supports the simulation of a more diverse set of household tasks through three key innovations. First, iGibson 2.0 supports object states, including temperature, wetness level, cleanliness level, and toggled and sliced states, necessary to cover a wider range of tasks. Second, iGibson 2.0 implements a set of predicate logic functions that map simulator states to logical states such as Cooked or Soaked. Additionally, given a logical state, iGibson 2.0 can sample valid physical states that satisfy it. This functionality can generate potentially infinite instances of tasks with minimal effort from users. The sampling mechanism also allows our scenes to be more densely populated with small objects in semantically meaningful locations. Third, iGibson 2.0 includes a virtual reality (VR) interface to immerse humans in its scenes so as to collect demonstrations. As a result, we can collect human demonstrations of these new types of tasks and use them for imitation learning. We evaluate the new capabilities of iGibson 2.0 to enable robot learning of novel tasks, in the hope of demonstrating the potential of this new simulator to support new research in embodied AI. iGibson 2.0 and its new dataset are publicly available at http://svl.stanford.edu/igibson/.
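To picture the predicate-logic layer described above, here is a minimal sketch of how simulator state could be mapped to logical states such as Cooked or Soaked, and how a satisfying physical state could be sampled back; all names, fields, and thresholds are assumptions for illustration, not the actual iGibson 2.0 API.

```python
# A minimal sketch assuming a simplified object-state record; names, fields,
# and thresholds are illustrative and are NOT the actual iGibson 2.0 API.
import random
from dataclasses import dataclass

@dataclass
class ObjectState:
    temperature: float       # current temperature (deg C)
    max_temperature: float   # highest temperature ever reached
    wetness: float           # 0.0 (dry) .. 1.0 (soaked)

COOK_THRESHOLD_C = 70.0      # assumed per-category cooking threshold
SOAK_THRESHOLD = 0.5

def is_cooked(state: ObjectState) -> bool:
    """Logical predicate Cooked: true once the object has exceeded its cooking temperature."""
    return state.max_temperature >= COOK_THRESHOLD_C

def is_soaked(state: ObjectState) -> bool:
    """Logical predicate Soaked: true when the absorbed-liquid level crosses a threshold."""
    return state.wetness >= SOAK_THRESHOLD

def sample_cooked_state(rng: random.Random) -> ObjectState:
    """Inverse direction: sample one physical state that satisfies Cooked."""
    peak = rng.uniform(COOK_THRESHOLD_C, COOK_THRESHOLD_C + 50.0)
    return ObjectState(temperature=rng.uniform(20.0, peak),
                       max_temperature=peak,
                       wetness=rng.uniform(0.0, 1.0))

assert is_cooked(sample_cooked_state(random.Random(0)))
```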
Common conditions such as stroke and spinal cord injury may cause loss of motor function in the hands. They can be treated with robot-assisted rehabilitation techniques, such as repeatedly opening and closing the hand with the help of a robot, in a cheaper and less time-consuming manner than traditional methods. Hand exoskeletons have been developed to assist rehabilitation, but their bulky nature brings certain challenges. As soft robots use elastomeric and fabric elements rather than heavy links, and are actuated pneumatically, hydraulically, or by tendons rather than by traditional rotary or linear motors, soft hand exoskeletons are considered a better option for rehabilitation.
Smart retail stores are becoming a fact of our lives. Several computer vision and sensor-based systems work together to achieve such complex and automated operation. Moreover, the retail sector already has several open and challenging problems that can be solved with the help of pattern recognition and computer vision methods. One important problem to be tackled is planogram compliance control. In this study, we propose a novel method to solve it. The proposed method consists of object detection, planogram compliance control, and focused and iterative search steps. The object detection step is based on local feature extraction and implicit shape model formation. The planogram compliance control step is based on sequence alignment via a modified Needleman-Wunsch algorithm. The focused and iterative search step aims to improve the performance of the object detection and planogram compliance control steps. We tested all three steps on two different datasets. Based on these tests, we summarize the key findings as well as the strengths and weaknesses of the proposed method.
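As a concrete illustration of the sequence-alignment step, a plain Needleman-Wunsch scorer over product-label sequences might look like the sketch below; the scoring scheme and example shelf contents are assumptions, and the paper's modifications to the algorithm are not reproduced.

```python
# Minimal Needleman-Wunsch global alignment between a reference planogram row
# and a detected product sequence; scoring values are illustrative only.
def needleman_wunsch(reference, detected, match=1, mismatch=-1, gap=-1):
    n, m = len(reference), len(detected)
    # score[i][j] = best alignment score of reference[:i] vs detected[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if reference[i - 1] == detected[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[n][m]

# Example: the planogram expects products A,A,B,C on a shelf; the detector saw A,B,B,C.
print(needleman_wunsch(list("AABC"), list("ABBC")))
```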
We demonstrate transfer learning-assisted neural network models for optical matrix multipliers with scarce measurement data. Our approach uses <10% of the experimental data needed for best performance and outperforms analytical models for a Mach-Zehnder interferometer mesh.
We propose a method for in-hand 3D scanning of an unknown object from a sequence of color images. We cast the problem as reconstructing the object surface from un-posed multi-view images and rely on a neural implicit surface representation that captures both the geometry and the appearance of the object. By contrast with most NeRF-based methods, we do not assume that the camera-object relative poses are known and instead simultaneously optimize both the object shape and the pose trajectory. As global optimization over all the shape and pose parameters is prone to fail without coarse-level initialization of the poses, we propose an incremental approach which starts by splitting the sequence into carefully selected overlapping segments within which the optimization is likely to succeed. We incrementally reconstruct the object shape and track the object poses independently within each segment, and later merge all the segments by aligning poses estimated at the overlapping frames. Finally, we perform a global optimization over all the aligned segments to achieve full reconstruction. We experimentally show that the proposed method is able to reconstruct the shape and color of both textured and challenging texture-less objects, outperforms classical methods that rely only on appearance features, and its performance is close to recent methods that assume known camera poses.
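The segment-merging step can be pictured with a minimal sketch: assuming 4x4 camera-to-object pose matrices and a single shared overlapping frame (the method itself uses several overlapping frames and a final global optimization), aligning one segment into the other reduces to composing two poses.

```python
# Illustrative merge of two reconstruction segments by aligning the poses they
# assign to a shared frame; a simplification of the paper's multi-frame merge.
import numpy as np

def merge_segments(poses_a, poses_b, overlap_frame):
    """poses_a / poses_b: dicts frame_id -> 4x4 camera-to-object pose.
    Returns segment B's poses re-expressed in segment A's object frame."""
    # Transform mapping segment B's object frame into segment A's,
    # computed from the pose both segments estimate for the shared frame.
    T_ab = poses_a[overlap_frame] @ np.linalg.inv(poses_b[overlap_frame])
    return {f: T_ab @ T for f, T in poses_b.items()}
```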
We investigate the problem of risk averse robot path planning using the deep reinforcement learning and distributionally robust optimization perspectives. Our problem formulation involves modelling the robot as a stochastic linear dynamical system, assuming that a collection of process noise samples is available. We cast the risk averse motion planning problem as a Markov decision process and propose a continuous reward function design that explicitly takes into account the risk of collision with obstacles while encouraging the robot's motion towards the goal. We learn the risk-averse robot control actions through Lipschitz approximated Wasserstein distributionally robust deep Q-learning to hedge against the noise uncertainty. The learned control actions result in a safe and risk averse trajectory from the source to the goal, avoiding all the obstacles. Various supporting numerical simulations are presented to demonstrate our proposed approach.
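One possible shape for a continuous reward that trades off goal progress against collision risk is sketched below; the functional form, weights, and obstacle model are illustrative assumptions, not the paper's exact design.

```python
# Illustrative continuous reward combining progress toward the goal with a
# smooth collision-risk penalty; all weights and scales are assumed values.
import numpy as np

def reward(position, goal, obstacles, obstacle_radius=0.5,
           w_goal=1.0, w_risk=5.0, risk_scale=1.0):
    position, goal = np.asarray(position), np.asarray(goal)
    goal_term = -w_goal * np.linalg.norm(position - goal)   # closer to goal -> higher reward
    risk_term = 0.0
    for obs in obstacles:
        d = np.linalg.norm(position - np.asarray(obs)) - obstacle_radius
        # penalty grows sharply as the robot approaches an obstacle surface
        risk_term -= w_risk * np.exp(-risk_scale * max(d, 0.0))
    return goal_term + risk_term

# Example: robot at (1, 1), goal at (5, 5), one obstacle at (3, 3).
print(reward([1.0, 1.0], [5.0, 5.0], obstacles=[[3.0, 3.0]]))
```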
This document describes the Portuguese-language podcast dataset released by Spotify for academic research purposes. We outline how the data were sampled, give some basic statistics of the collection, and provide brief information on the distribution of Brazilian and Portuguese dialects.
Structural decomposition methods, such as generalized hypertree decompositions, have been successfully used for solving constraint satisfaction problems (CSPs). Since a decomposition can be reused to solve CSPs with the same constraint scopes, it is beneficial to invest resources in computing a good decomposition, even though the computation itself is hard. Unfortunately, current methods need to compute a completely new decomposition even if the scopes change only slightly. In this paper, we take the first steps toward the problem of updating the decomposition of a CSP $P$ so that it becomes a valid decomposition of a new CSP $P'$ produced by a modification of $P$. Even though the problem is hard in theory, we propose and implement a framework for efficiently updating GHDs. The experimental evaluation of our algorithm strongly suggests practical applicability.
We introduce a novel technique and an associated high-resolution dataset intended for the precise evaluation of indoor localization algorithms based on wireless signals. The technique implements an augmented reality (AR) based positioning system that is used to annotate wireless signal parameter data samples with high-precision position data. We track the positions of a practical, low-cost navigable camera setup and of Bluetooth Low Energy (BLE) beacons in an area decorated with AR markers. We maximize the performance of the AR-based localization by using redundant digital markers. The video stream captured by the camera goes through a series of marker recognition, subset selection, and filtering operations to produce highly accurate pose estimates. Our results show that we can reduce the positional error of the AR localization system to below 0.05 m. The position data are then used to annotate the BLE data captured simultaneously by sensors stationed in the environment, thereby constructing a wireless signal dataset with ground truth that allows the accurate evaluation of wireless-signal-based localization systems.
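A minimal sketch of how redundant per-marker estimates could be fused, assuming each detected marker yields a camera-position estimate; the outlier threshold and fusion rule here are stand-ins for the paper's subset-selection and filtering stages, not their actual pipeline.

```python
# Hypothetical fusion of per-marker camera-position estimates: discard outliers
# far from the median, then average the survivors.
import numpy as np

def fuse_marker_estimates(positions, max_dev=0.05):
    """positions: (N, 3) camera-position estimates in metres, one per detected marker."""
    positions = np.asarray(positions)
    med = np.median(positions, axis=0)
    keep = np.linalg.norm(positions - med, axis=1) <= max_dev  # reject gross outliers
    return positions[keep].mean(axis=0)
```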
We propose a method for estimating the 6DoF pose of a rigid object with an available 3D model from a single RGB image. Unlike classical correspondence-based methods, which predict 3D object coordinates at pixels of the input image, the proposed method predicts 3D object coordinates at 3D query points sampled in the camera frustum. Moving from pixels to 3D points, which is inspired by recent PIFu-style methods for 3D reconstruction, enables reasoning about the whole object, including its (self-)occluded parts. For a 3D query point associated with a pixel-aligned image feature, we train a fully connected neural network to predict: (i) the corresponding 3D object coordinates, and (ii) the signed distance to the object surface, with the first defined only for query points in the surface vicinity. We call the mapping realized by this network a Neural Correspondence Field. The object pose is then robustly estimated from the predicted 3D-3D correspondences by the Kabsch-RANSAC algorithm. The proposed method achieves state-of-the-art results on three BOP datasets and is shown to perform better in challenging cases with occlusion. The project website is at: linhuang17.github.io/ncf.
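For reference, the Kabsch step that recovers a rigid pose from 3D-3D correspondences can be written compactly as below; the RANSAC loop and any method-specific weighting are omitted, so this is a generic sketch rather than the authors' implementation.

```python
# Kabsch algorithm: least-squares rigid transform (R, t) aligning predicted 3D
# object coordinates to the corresponding 3D points in the camera frame.
import numpy as np

def kabsch(obj_pts, cam_pts):
    """obj_pts, cam_pts: (N, 3) corresponding points. Returns R, t with cam_pts ≈ R @ obj_pts + t."""
    obj_c, cam_c = obj_pts.mean(axis=0), cam_pts.mean(axis=0)
    H = (obj_pts - obj_c).T @ (cam_pts - cam_c)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cam_c - R @ obj_c
    return R, t
```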